Adaptive step-length selection in gradient boosting for Gaussian location and scale models
Abstract
Tuning of model-based boosting algorithms relies mainly on the number of boosting iterations, while the step-length is fixed at a predefined value. For complex models with several predictors, such as generalized additive models for location, scale and shape (GAMLSS), imbalanced updating of the predictors, where some distribution parameters are updated more frequently than others, can be a problem that prevents some submodels from being appropriately fitted within a limited number of iterations. We propose an approach using adaptive step-length (ASL) determination in a non-cyclical boosting algorithm for Gaussian location and scale models, an important special case of the wider class of GAMLSS, to prevent such imbalance. Moreover, we discuss properties of the ASL and derive a semi-analytical form that avoids manual selection of the search interval and numerical optimization to find the optimal step-length, and consequently improves computational efficiency. We show the competitive behavior of the proposed approaches compared to penalized maximum likelihood in two simulations and applications, in particular for cases of large variance and/or more variables than observations. In addition, the underlying concept is also applicable to the whole GAMLSS framework and to other models with more than one predictor, such as zero-inflated count models, and brings up insights into the choice of reasonable defaults for simpler (Gaussian) models.
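To make the idea concrete, the sketch below shows one way a non-cyclical boosting iteration with an adaptive step-length could look for a Gaussian location and scale model. This is a minimal illustration under my own assumptions, not the authors' implementation or their semi-analytical ASL: the helper names (nll, fit_base_learner, boosting_step) and the bounded line search are illustrative choices.

```python
# Minimal sketch (not the authors' implementation) of a non-cyclical boosting
# step for a Gaussian location/scale model in which the step-length is chosen
# adaptively by a line search instead of being fixed at a predefined value.
import numpy as np
from scipy.optimize import minimize_scalar

def nll(y, mu, log_sigma):
    """Gaussian negative log-likelihood (up to an additive constant)."""
    return np.sum(log_sigma + (y - mu) ** 2 / (2.0 * np.exp(2.0 * log_sigma)))

def fit_base_learner(X, u):
    """Componentwise linear base learner: fit the single column of X that
    best explains the negative gradient u (least squares)."""
    best_fit, best_rss = None, np.inf
    for j in range(X.shape[1]):
        x = X[:, j]
        beta = x @ u / (x @ x)
        fit = beta * x
        rss = np.sum((u - fit) ** 2)
        if rss < best_rss:
            best_fit, best_rss = fit, rss
    return best_fit

def boosting_step(X, y, mu, log_sigma):
    """One non-cyclical iteration: compute candidate updates for both
    distribution parameters, find an adaptive step-length for each by a
    one-dimensional line search on the negative log-likelihood, and keep
    only the update that reduces the loss the most."""
    sigma2 = np.exp(2.0 * log_sigma)
    # negative gradients of the NLL w.r.t. the two additive predictors
    u_mu = (y - mu) / sigma2
    u_sig = (y - mu) ** 2 / sigma2 - 1.0
    h_mu = fit_base_learner(X, u_mu)
    h_sig = fit_base_learner(X, u_sig)
    # adaptive step-lengths (search interval chosen ad hoc here)
    nu_mu = minimize_scalar(lambda nu: nll(y, mu + nu * h_mu, log_sigma),
                            bounds=(0.0, 10.0), method="bounded").x
    nu_sig = minimize_scalar(lambda nu: nll(y, mu, log_sigma + nu * h_sig),
                             bounds=(0.0, 10.0), method="bounded").x
    loss_mu = nll(y, mu + nu_mu * h_mu, log_sigma)
    loss_sig = nll(y, mu, log_sigma + nu_sig * h_sig)
    if loss_mu <= loss_sig:          # update only the "winning" parameter
        return mu + nu_mu * h_mu, log_sigma
    return mu, log_sigma + nu_sig * h_sig
```

A full algorithm would repeat this step for a tuned number of iterations; whether and how the line-search optimum is additionally shrunk is left open in this sketch, and the paper's semi-analytical form would replace the numerical line search entirely.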
Similar resources
Adaptive sampling for large scale boosting
Classical Boosting algorithms, such as AdaBoost, build a strong classifier without concern for the computational cost. Some applications, in particular in computer vision, may involve millions of training examples and very large feature spaces. In such contexts, the training time of off-the-shelf Boosting algorithms may become prohibitive. Several methods exist to accelerate training, typically...
GEFCom2012 Hierarchical load forecasting: Gradient boosting machines and Gaussian processes
This report discusses methods for forecasting hourly loads of a US utility as part of the load forecasting track of the Global Energy Forecasting Competition 2012 hosted on Kaggle. The methods described (gradient boosting machines and Gaussian processes) are generic machine learning / regression algorithms and few domain specific adjustments were made. Despite this, the algorithms were able to ...
Adaptive Step-Size for Policy Gradient Methods
In the last decade, policy gradient methods have significantly grown in popularity in the reinforcement-learning field. In particular, they have been largely employed in motor control and robotic applications, thanks to their ability to cope with continuous state and action domains and partially observable problems. Policy gradient research has been mainly focused on the identification of effe...
A stochastic gradient adaptive filter with gradient adaptive step size
This paper presents an adaptive step-size gradient adaptive filter. The step size of the adaptive filter is changed according to a gradient descent algorithm designed to reduce the squared estimation error during each iteration. An approximate analysis of the performance of the adaptive filter when its inputs are zero mean, white, and Gaussian and the set of optimal coefficients are time varyin...
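The description above suggests a step size that is itself updated by a gradient step on the squared estimation error. The following minimal sketch illustrates that idea under my own assumptions; the function and parameter names (gass_lms, mu0, rho, order) and the clipping bounds are illustrative and not taken from the cited paper.

```python
# Minimal sketch of a gradient-adaptive step-size LMS filter: the step size
# is adapted by a stochastic gradient step aimed at reducing the squared
# estimation error.  Names and constants are illustrative assumptions.
import numpy as np

def gass_lms(x, d, order=4, mu0=0.01, rho=1e-4, mu_min=0.0, mu_max=0.1):
    """Adaptively filter the input signal x towards the desired signal d."""
    w = np.zeros(order)          # filter coefficients
    mu = mu0                     # adaptive step size
    x_prev = np.zeros(order)
    e_prev = 0.0
    errors = []
    for n in range(order, len(x)):
        x_n = x[n - order:n][::-1]           # current regressor vector
        e = d[n] - w @ x_n                   # estimation error
        # gradient step on the step size itself, kept inside [mu_min, mu_max]
        mu = np.clip(mu + rho * e * e_prev * (x_n @ x_prev), mu_min, mu_max)
        w = w + mu * e * x_n                 # standard LMS coefficient update
        x_prev, e_prev = x_n, e
        errors.append(e)
    return w, np.asarray(errors)
```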
Simulation and experimental studies for prediction of mineral scale formation in oil fields during mixing of injection and formation water
Abstract: Mineral scaling in oil and gas production equipment is one of the most important problems that occur during water injection, and it has been recognized as a major operational problem. The incompatibility between injected and formation waters may result in inorganic scale precipitation in the equipment and reservoir, and thus a reduction of the oil production rate and water injection rate. ...
Journal
Journal title: Computational Statistics
Year: 2022
ISSN: 0943-4062, 1613-9658
DOI: https://doi.org/10.1007/s00180-022-01199-3